Population Density - Mexico

15-minute reach

Author

Edgar Daniel

Question: How many people can I reach in 15 minutes or less?

In this notebook we explore the task of selecting a set of points on the map, placed so that the number of people within the 15-minute isochrone of each point is maximised for a given transport time.

Problem Statement:

Consider the following scenario: you are a retail store entering the Mexican market and developing an expansion plan for Mexico. The C-suite tells you that the strategy is to start by opening ten stores in the country, providing the following guidance:

1.- You must open ten stores in ten different states.

2.- The objective is to maximise the population within a 15-minute radius of each store.

3.- You can select any geographical location; there are no other geographical restrictions.

4.- For a store to be considered part of a state, it must be located within the state, and at least 90% of its isochrone must also be within the state.

Please note that, for the sake of this experiment, we are ignoring geographical, political, legal and security constraints, which would otherwise be applicable for a more realistic approach.
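Constraint 4 is straightforward to check with sf once an isochrone is available. A minimal sketch, assuming `iso` and `state` are hypothetical sf polygons (a store's isochrone and its state) in the same projected CRS:

```r
# Sketch: share of an isochrone's area that falls inside a state polygon.
# `iso` and `state` are assumed sf polygons in the same projected CRS.
library(sf)

isochrone_share_in_state <- function(iso, state) {
  inside <- suppressWarnings(st_intersection(iso, state))
  if (length(st_geometry(inside)) == 0) return(0)
  as.numeric(sum(st_area(inside)) / sum(st_area(iso)))
}

# A store satisfies constraint 4 when the share is at least 0.9:
# isochrone_share_in_state(iso, state) >= 0.9
```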

Data Preparation


First the census data:

### Data ---------------------------------------------------------------------

library(dplyr)
library(sf)

# Censo 2020 at manzana (city block) level
files <- sprintf("%s/RESAGEBURB_%02d_2020_csv/RESAGEBURB_%02dCSV20.csv",
                 paste0(RAW_DATA, CENSO_FOLDER),
                 1:32, 1:32)

files_raw <- lapply(files, function(f) {
  if (file.exists(f)) read.csv(f) else NULL
})

censo_df <- do.call(rbind, files_raw[!sapply(files_raw, is.null)]) |>
  # Filter out aggregated totals for entity, municipality, locality and AGEB
  filter(ENTIDAD != 0, MUN != 0, LOC != 0, AGEB != '0000', MZA != 0) |>
  # Select the columns of interest
  select('ENTIDAD', 'MUN', 'LOC', 'AGEB', 'MZA', 'POBTOT') |>
  # Zero-pad the keys and create the CVEGEO index
  mutate(
    ENTIDAD = sprintf("%02d", as.integer(ENTIDAD)),
    MUN     = sprintf("%03d", as.integer(MUN)),
    LOC     = sprintf("%04d", as.integer(LOC)),
    AGEB    = stringr::str_pad(AGEB, 4, pad = "0"),
    MZA     = sprintf("%03d", as.integer(MZA)),
    CVEGEO  = paste0(ENTIDAD, MUN, LOC, AGEB, MZA)
  )

Now we get the shapefiles for each MZA in the country:

# MZA polygons for the whole country
files <- sprintf("%s/%s/conjunto_de_datos/%02dm.shp",
                 paste0(RAW_DATA, MARCOGEO_FOLDER),
                 listDirectory(paste0(RAW_DATA, MARCOGEO_FOLDER)), 1:32)

shapes <- lapply(files, function(f) {
  if (file.exists(f)) st_read(f, quiet = TRUE) else NULL
})

# Concat all SHP files and delete null geometries 

mzns_shp <- do.call(rbind, shapes[!sapply(shapes, is.null)]) |>
  st_make_valid()

mzns_shp <- mzns_shp[!is.na(st_geometry(mzns_shp)) & !st_is_empty(mzns_shp), ]

Finally, we join both into the same table and assign the corresponding Uber H3 cell (see documentation). This lets us aggregate total population by zone regardless of INEGI’s official stratification.


# Clean up NAs in the population variable
censo_df$POBTOT <- ifelse(is.na(as.numeric(censo_df$POBTOT)),
                          0, as.numeric(censo_df$POBTOT))

# Add geometries to each object
censo_sf <- censo_df |>
  left_join(mzns_shp, by = 'CVEGEO') |>
  st_as_sf(crs = st_crs(mzns_shp)) |>
  mutate(
    # st_point_on_surface guarantees an interior point, unlike st_centroid
    geometry = st_point_on_surface(geometry)
  )


# Drop Null Geometries 
censo_sf <- censo_sf[!is.na(st_geometry(censo_sf)) & !st_is_empty(censo_sf), ]

# Add the H3 cell corresponding to each point
censo_sf <- censo_sf |>
  mutate(
    CELL = point_to_cell(st_transform(geometry, 4326), res = H3_RESOLUTION)
  ) |>
  # Keep only the columns used by the algorithm
  select(CVEGEO, ENTIDAD, CELL, POBTOT)

# Aggregated at a cell level
agg_cell <- censo_sf |>
  group_by(CELL) |> 
  summarise(
    POBTOT = sum(POBTOT, na.rm = TRUE),
    .groups = "drop"
  ) 

We save the complete sf object as GeoJSON so we do not need to process all the data more than once. We also save the polygon files of the country's states, along with a pre-calculated population aggregate by cell.



# Save the data into a processed folder 
st_write(censo_sf,paste0(PROC_DATA, CENSO_GEOJSON)
         ,layer = CENSO_GEOJSON)
  
st_write(agg_cell,paste0(PROC_DATA, CELLS_GEOJSON)
         ,layer = CELLS_GEOJSON)

Now we read the cleaned data back:


# Read the clean data 
censo_sf <- st_read(paste0(PROC_DATA, CENSO_GEOJSON), quiet = TRUE)
cells_sf <- st_read(paste0(PROC_DATA, CELLS_GEOJSON), quiet = TRUE)

# Append cell geometry to the aggregate and add the entity column
cells_sf <- cells_sf |>
  # Assign a state to each cell
  left_join(
    censo_sf |> st_drop_geometry() |>
      select(CELL, ENTIDAD) |> distinct(CELL, .keep_all = TRUE),
    by = 'CELL') |>
  # Drop the old geometry
  st_drop_geometry() |>
  # H3 geometry in WGS84 (H3 uses EPSG:4326)
  mutate(geometry = cell_to_polygon(CELL)) |>
  st_as_sf(crs = 4326)

# Get x, y coordinates from MZN centroid to filter out efficiently later
censo_sf <- censo_sf  |>
   mutate(
    coords = st_coordinates(geometry),
    lon    = coords[,1],
    lat    = coords[,2]
  ) |>
  select(-coords)

# POBTOT TO Numeric
num0 <- function(x) replace_na(suppressWarnings(as.numeric(x)), 0)

censo_sf <- censo_sf |> mutate(POBTOT = num0(POBTOT))
cells_sf <- cells_sf |> mutate(POBTOT = num0(POBTOT))

Let’s see what the population density looks like on a map:

Population density in the Mexico City Metropolitan Area

As we can see, most of the population is highly concentrated in urban areas. Another interesting pattern is that neighbouring cells do not necessarily have a high population. That’s why we should examine the most populated cells in more detail.

The Algorithm

To satisfy the business requirements, note that maximising population reach when selecting 10 locations across 10 states decomposes neatly: we run a tournament, selecting the location with the best population reach within each state, and then keep the top 10 states. This satisfies the restrictions while maximising our objective function.

The following algorithm therefore operates at the state level, and is repeated for each state.

The isochrone approximation

Given the high cost of computing isochrones (or, equivalently, the high cost of using a reliable API at scale), we use a circular-buffer approximation calibrated on a 15-minute isochrone from downtown Mexico City, one of the cities with the worst traffic conditions. This underestimates the reach of each candidate point, which is not necessarily a bad thing: since we ultimately want to maximise reach, a conservative estimate only makes the selection stricter.

In particular, we take the following point in Mexico City as the centre to represent the approximation used in the algorithm, shown in the following diagram:

Isochrone approximation via Inner Circle (ISO15 car in blue, ISO15 motorbike in green)
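One way to derive the buffer radius from a reference isochrone is the largest inscribed circle. A sketch, assuming `iso_ref` is a hypothetical sf polygon holding the reference 15-minute isochrone in a projected CRS in metres (requires sf built against GEOS >= 3.9):

```r
# Sketch: approximate the inner-circle radius of a reference isochrone.
# `iso_ref` is an assumed sf polygon in a projected CRS in metres.
library(sf)

inner <- st_inscribed_circle(st_geometry(iso_ref), dTolerance = 10)
inside_radius <- sqrt(as.numeric(st_area(inner)) / pi)  # radius in metres
```

The radius recovered here is what the algorithm below refers to as `inside_radius`.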

epsilon-greedy selection algorithm

The algorithm is based on a simple epsilon-greedy heuristic: at each iteration we greedily maximise the objective function for the current step, regardless of potentially better future selections, while leaving room for exploration (random selection). Here is the pseudo-code:
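The decay schedule used below can be written in closed form; a minimal base-R illustration (the function name is ours, for illustration only):

```r
# Epsilon after `iter` decay steps: starts at 0.5, shrinks by 0.01 per
# iteration, and is floored at 0.05 (reached at iteration 45).
eps_at <- function(iter, eps0 = 0.5, decay = 0.01, floor_ = 0.05) {
  max(floor_, eps0 - decay * iter)
}

eps_at(0)    # 0.5
eps_at(10)   # 0.4
eps_at(100)  # 0.05
```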

------------------------------------------------------------
Algorithm: State-Level Population Optimization via Isochrones
------------------------------------------------------------

Set MAX_ITERS = 250
Set EPS = 0.5
Set EPS_DECAY = 0.01
Set MIN_EPS = 0.05

Initialize opt_points = []
Initialize opt_values = []
Record start_time

For each state 'ent' in {01, 02, ..., 32}:
    Initialize:
        cells_visited = []
        opt_point = NULL
        opt_value = -Infinity
        iters = 0
        eps = EPS
        values_hist = []

    Create ordered list of cells for this state:
        cells_list = all cells within ENTIDAD == ent
                     ordered by descending population (POBTOT)
                     unique CELL identifiers only

    While iters < MAX_ITERS:
        Decrease epsilon: eps = max(MIN_EPS, eps - EPS_DECAY)

        Choose exploration vs exploitation:
            If random_number < eps:
                current_cell = random sample from cells_list  # Explore
            Else:
                current_cell = top cell in cells_list         # Exploit

        Try:
            Obtain cell centroid for current_cell
            Extract coordinates (x, y)

            Select census polygons within bounding box:
                filter censo_sf within (x ± inside_radius, y ± inside_radius)
                keep relevant columns
                assign CRS

            Further filter by spatial coverage:
                keep polygons covered by circular buffer centered at cell_centroid

            Compute current_value:
                sum of POBTOT within selected polygons

            If current_value > opt_value:
                opt_point = cell_centroid
                opt_value = current_value

        Catch any errors silently and continue

        Update tracking variables:
            Add current_cell to cells_visited
            Remove visited cells from cells_list
            Append opt_value to values_hist
            Increment iters

        Print iteration progress on same line

    End While

    Append opt_point and opt_value to global opt_points and opt_values
    Record end_time
    Print completion summary for this state with timing info

End For

------------------------------------------------------------
End of Algorithm
------------------------------------------------------------

Let’s run the main loop:


MAX_ITERS <- 250  # Max iterations per state
EPS <- 0.5        # Epsilon
EPS_DECAY <- 0.01 # Epsilon decay
MIN_EPS <- 0.05
# NOISE <- # Centroid noise in metres

# Nationwide lists
opt_points <- c()
opt_values <- c()

start <- Sys.time()


for (ent in sprintf("%02d", 1:32)) {


  # State-level values
  cells_visited <- c()
  opt_point <- NULL
  opt_value <- -Inf
  iters <- 0
  eps <- EPS
  values_hist <- c()

  # Generate an ordered list of cells (by population)
  cells_list <- cells_sf |>
    st_drop_geometry() |> filter(ENTIDAD == ent) |>
    select(CELL, POBTOT) |> arrange(desc(POBTOT)) |>
    distinct() |> pull(CELL)
  
  while(iters < MAX_ITERS){
    
    # Decide whether to explore or exploit
    eps <- max(MIN_EPS, eps - EPS_DECAY)
    if (runif(1) < eps) { # Explore
      current_cell <- sample(cells_list, 1)
    } else {              # Exploit
      current_cell <- cells_list[1]
    }
    
    # Create a buffer for the selected cell and evaluate
    try({
      cell_centroid <- cells_sf |>
        st_transform(st_crs(mzns_shp)) |>
        filter(ENTIDAD == ent, CELL == current_cell) |>
        st_centroid()

      coords <- cell_centroid |> st_coordinates()
      x <- coords[, 1]
      y <- coords[, 2]

      # Cheap bounding-box pre-filter before the exact spatial filter
      censo_sf_cell_aux <- censo_sf |>
        filter(lon >= x - inside_radius, lon <= x + inside_radius,
               lat >= y - inside_radius, lat <= y + inside_radius) |>
        select(-c(lon, lat)) |>
        st_set_crs(st_crs(mzns_shp))

      censo_sf_cell_aux <- censo_sf_cell_aux |>
        st_filter(st_buffer(cell_centroid, dist = inside_radius),
                  .predicate = st_covered_by)

      current_value <- censo_sf_cell_aux |> st_drop_geometry() |>
        pull(POBTOT) |> sum()

      if (current_value > opt_value) {
        opt_point <- cell_centroid
        opt_value <- current_value
      }

    }, silent = TRUE)
  
    
    
    # Update lists
    cells_visited <- c(cells_visited, current_cell)
    cells_list <- cells_list[!cells_list %in% cells_visited]
    values_hist <- c(values_hist, opt_value)
    iters <- iters + 1

    cat("\rIteration ", iters, " completed.")
    
  }
    
  # Update global values
  opt_points <- c(opt_points, opt_point)
  opt_values <- c(opt_values, opt_value)

  end <- Sys.time()

  cat("\nState ", ent, " with optimum ", opt_value, " completed in ",
      as.numeric(difftime(end, start, units = "secs")), " (secs).\n")
  
}

end <- Sys.time()

cat("\nProcess finished in ", as.numeric(difftime(end, start, units = "secs")), " (secs).")
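Once the loop finishes, the per-state optima can be collected into a ranking. A base-R sketch, assuming `opt_values` is filled in state-code order as in the loop above:

```r
# Sketch: rank states by the best population reach found for each one,
# and keep the ten best. Assumes `opt_values` holds one value per state,
# in the same order as sprintf("%02d", 1:32).
results <- data.frame(
  ENTIDAD = sprintf("%02d", 1:32),
  POBTOT  = opt_values
)
top10 <- head(results[order(-results$POBTOT), ], 10)
```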

Let’s view the results to define the 10 points to use as new store locations. Here are the top 10 points, with the best population estimates, where to put our stores:


ENTIDAD   POBTOT
     15   697993
      9   689681
     14   517050
     23   365597
     24   362658
     21   348812
     11   340956
     19   340841
     30   333821
      1   317798


National Analysis

We now calculate the real urban population coverage of the 15-minute isochrones for the points obtained by the algorithm in the previous section, using the HERE API isochrones for each of the ten selected points:

### Map  -----------------------------------------------------------------------

library(leaflet)
library(sf)


car_iso15 <- st_read(paste0(SAMPLE_DATA,FINAL_ISO_MX_C), quiet = TRUE)
mb_iso15 <- st_read(paste0(SAMPLE_DATA,FINAL_ISO_MX_M), quiet = TRUE)
cells_sf <- st_read(paste0(SAMPLE_DATA,FILE_CENSO_CELLS), quiet = TRUE)


pal <- colorNumeric(
  palette = c("white", "darkblue"),   # use a color vector or palette name
  domain = cells_sf$POBTOT
)

leaflet() |>
  addProviderTiles(providers$CartoDB.Positron) |>
  addPolygons(
    data = cells_sf,
    color = "transparent",
    weight = 1,
    fillColor = ~pal(POBTOT),
    fillOpacity = 0.5,
    label = ~paste0("POBTOT: ", POBTOT)
  ) |>
  addPolygons(
    data = car_iso15,
    color = "black",
    weight = 1,
    fillColor = "lightgreen",
    fillOpacity = 0.7
  ) |>
  addPolygons(
    data = mb_iso15,
    color = "blue",
    weight = 1,
    fillColor = "lightblue",
    fillOpacity = 0.5
  )

After evaluating the results against the census data, we conclude that 6.59% of the total population reported by the 2020 Census was inside one of the ten selected isochrones. Comparing the covered population at a state level, we observe:
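The national coverage figure can be reproduced along the following lines. A sketch, assuming `car_iso15` (the ten HERE isochrones) and `censo_sf` share a CRS, and using block centroids as the membership test:

```r
# Sketch: national coverage = population of census blocks whose point
# geometry falls inside any of the ten isochrones, over the national total.
library(sf)

iso_union <- st_union(st_geometry(car_iso15))
reached   <- censo_sf |> st_filter(iso_union, .predicate = st_within)

coverage_pct <- 100 * sum(reached$POBTOT) / sum(censo_sf$POBTOT)
```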


CVE_MUN   total_pop   reached_pop   pop_pct
002          432205        432123       100
003          614447        587653        96
004          212645            NA        NA
005         1173351        744387        63
006          404695        377858        93
007         1835486       1551653        85
008          246428          1909         1
009          128710            NA        NA
010          759003        430994        57
011          385280        249894        65
012          685348        118159        17
013          429388         20877         5
014          434153        434153       100
015          545842        529061        97
016          414470        343525        83
017          443704        390920        88


We can conclude here that, given the population concentration in small areas, some states, such as 01 (Aguascalientes) and 24 (San Luis Potosí), have high coverage, while others, such as 09 (Mexico City) and 15 (Mexico State), have low coverage given the extent of urban areas across their territories.

Mexico City reach

Let’s give the problem, and thereby the algorithm, a small twist, and solve a similar problem inside a particular city.

Now we want to cover as much of the population of the Mexico City area as possible by choosing 7 store locations, maximising the number of people inside a 15-minute isochrone, under the following conditions:

  1. Open seven stores around the city.

  2. The objective is to maximise the population within a 15-minute radius of each store.

  3. You can select any location within CDMX; there are no geographical restrictions.

  4. If two different isochrones intersect, people in the overlapping area are counted only once.
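Condition 4 amounts to de-duplicating census blocks across isochrones before summing their population. A toy base-R illustration, using hypothetical block ids and populations:

```r
# Toy illustration (hypothetical block ids and populations): people in a
# block covered by two isochrones are counted once via a union of ids.
pop <- c(A = 100, B = 250, C = 80)

iso1_blocks <- c("A", "B")   # blocks covered by the first isochrone
iso2_blocks <- c("B", "C")   # blocks covered by the second isochrone

covered <- union(iso1_blocks, iso2_blocks)
sum(pop[covered])            # 430: block B is not double-counted
```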

The modified Algorithm

The algorithm for this part uses the same heuristic logic, again employing the circular buffer to approximate the final isochrones and moving from cell to cell until good locations for the store units around the city are found, while penalising overlapping isochrones by not counting the population of the overlapping area twice. The pseudo-code follows:

------------------------------------------------------------
Algorithm: City level Population Optimization via k Isochrones
------------------------------------------------------------

Set MAX_ITERS = 750
Set EPS = 0.5
Set EPS_DECAY = 0.01
Set MIN_EPS = 0.05
Set STORAGE_UNITS = 7

initialize iteration counter iters = 1
initialize epsilon = EPS
initialize empty lists:
    opt_points, opt_cells, opt_values
    isochrones_list, cvegeo_occupied
initialize covered_cvegeos = empty set
initialize min_idx = null
initialize min_value = -infinity

while iters < MAX_ITERS:

    # --- EPSILON GREEDY SELECTION ---
    decrease epsilon by EPS_DECAY but not below MIN_EPS
    generate random number r in [0,1]

    if r < epsilon:
        # explore
        current_cell = random choice from remaining cells_list
    else:
        # exploit
        current_cell = best available cell in cells_list (first position)

    # --- MAIN LOOP (TRY BLOCK) ---
    try:
        # compute centroid of current_cell
        cell_centroid = centroid of cell

        # create circular buffer around centroid
        point_buffer = buffer(cell_centroid, inside_radius)

        # find census polygons within buffer AND not already used
        censo_subset = polygons within bounding box of centroid
        censo_subset = polygons covered by buffer
        censo_subset = polygons not in covered_cvegeos

        # compute value = total population inside this buffer
        current_value = sum( population in censo_subset )

        # --- CASE 1: STORAGE ALREADY FULL ---
        if iters > STORAGE_UNITS:

            # if new location is better than the current worst
            if current_value > min_value:

                replace worst slot (min_idx) with:
                    opt_points[min_idx] = cell_centroid
                    opt_cells[min_idx]  = current_cell
                    opt_values[min_idx] = current_value
                    isochrones_list[min_idx] = point_buffer
                    cvegeo_occupied[min_idx] = set of CVEGEO in censo_subset

                # recompute covered_cvegeos from all stored units
                covered_cvegeos = union of all cvegeo_occupied[i]

                # recompute worst slot
                min_value = +infinity
                for i from 1 to STORAGE_UNITS:
                    if opt_values[i] < min_value:
                        min_value = opt_values[i]
                        min_idx = i

        # --- CASE 2: FILLING INITIAL STORAGE SLOTS ---
        else:

            store:
                opt_points[iters] = cell_centroid
                opt_cells[iters]  = current_cell
                opt_values[iters] = current_value
                isochrones_list[iters] = point_buffer
                cvegeo_occupied[iters] = set of CVEGEO in censo_subset

            add newly occupied CVEGEO to covered_cvegeos

            # update current best (max value)
            if current_value > min_value:
                min_value = current_value
                min_idx = iters

    except:
        continue

    # --- UPDATE STATE FOR NEXT ITERATION ---
    add current_cell to cells_visited
    remove current_cell from cells_list
    append sum(opt_values) to values_hist
    iters = iters + 1

end while

------------------------------------------------------------
End of Algorithm
------------------------------------------------------------

Now, let’s run the algorithm above to get the selected store locations around Mexico City that maximise the total population inside the 15-minute isochrones:


# We filter the CDMX data to focus on the area of interest 
cdmx_cells_sf <- cells_sf |> filter(ENTIDAD == '09')
cdmx_poly <- st_read(paste0(SAMPLE_DATA,SHP_CDMX), quiet = TRUE)
censo_sf <-  censo_sf |> st_set_crs(st_crs(mzns_shp))

options(warn = -1) # Deactivate Warnings 

MAX_ITERS <- 750 # Max iterations 
EPS <- 0.5 # Epsilon
EPS_DECAY <- 0.1 #   Epsilon decay
MIN_EPS <- 0.05
STORAGE_UNITS <- 7

#  lists 
opt_points <-  list()
opt_cells  <- list()
opt_values <- numeric()
isochrones_list <- list()
cvegeo_occupied <- list()

# Value initialisations
cells_visited <- c()
min_idx <- NULL   # index of the worst (lowest-value) slot
min_value <- -Inf # value of the worst slot
iters <- 1
eps <- EPS
values_hist <- c()
covered_cvegeos <- c()


# Get the city frontier
city_boundary <- st_union(st_geometry(cdmx_poly))
# Only allow centres at least D_min metres from the frontier
D_min <- as.integer(inside_radius * 0.3)

# Generate an ordered list of cells (by population)
cells_list <- cdmx_cells_sf |>
  mutate(
    dist_border = as.numeric(
      st_distance(st_transform(geometry, st_crs(mzns_shp)),
                  st_boundary(city_boundary)))  # n x 1 matrix -> vector
  ) |>
  st_drop_geometry() |>
  filter(dist_border > D_min) |>
  select(CELL, POBTOT) |> arrange(desc(POBTOT)) |>
  distinct() |> pull(CELL)

MAX_ITERS <- min(MAX_ITERS,length(cells_list))

while (iters < MAX_ITERS) {
  
  # Epsilon-greedy: explore vs exploit
  eps <- max(MIN_EPS, eps - EPS_DECAY)
  if (runif(1) < eps) {        # Explore
    current_cell <- sample(cells_list, 1)
  } else {                     # Exploit (best remaining)
    current_cell <- cells_list[1]
  }
  
  # Main Loop
  try({
    cell_centroid <- cdmx_cells_sf |>
      filter(CELL == current_cell) |>
      slice(1) |>              
      st_transform(st_crs(mzns_shp)) |>
      st_centroid()
    
    # Circular buffer 
    point_buffer <- st_buffer(cell_centroid, dist = inside_radius)
    
    coords <- cell_centroid |> st_coordinates()
    x <- coords[,1]  
    y <- coords[,2]  
    
    # Census polygons covered by this buffer and not already occupied
    censo_sf_cell_aux <- censo_sf |>
      filter(lon >= x - inside_radius,lon <= x + inside_radius , 
              lat >= y - inside_radius,lat <= y + inside_radius , 
              !CVEGEO %in% covered_cvegeos
             ) |>
      st_filter(point_buffer$geometry, .predicate = st_covered_by) |> 
      select(-c(lon, lat)) 

    
    # Get value 
    current_value <- censo_sf_cell_aux |>
      st_drop_geometry() |>
      pull(POBTOT) |>
      sum(na.rm = TRUE)
    
    # Compete for a slot once all storage units are assigned
    if (iters > STORAGE_UNITS) {
      # Replace worst if we improved it
      if (current_value > min_value) {
        
        opt_points[[min_idx]] <- cell_centroid
        opt_values[min_idx] <- current_value
        opt_cells[[min_idx]] <- current_cell
        isochrones_list[[min_idx]] <- point_buffer
        cvegeo_occupied[[min_idx]] <- censo_sf_cell_aux |>
          st_drop_geometry() |>
          select(CVEGEO) |>
          distinct() |>
          pull(CVEGEO)
        
        covered_cvegeos <- c()
        for(i in 1:STORAGE_UNITS){
          covered_cvegeos <- c(covered_cvegeos, 
            cvegeo_occupied[[i]]
          )
        }
        
        # Recompute worst 
        min_value <- Inf
        for (i in 1:STORAGE_UNITS) {
          val <- opt_values[[i]]
          if (val < min_value) {
            min_idx <- i
            min_value <- val
          }
        }
      }
    } else {
      # Filling first slots
      opt_points[[iters]] <- cell_centroid
      opt_values[iters] <- current_value
      opt_cells[[iters]] <- current_cell
      isochrones_list[[iters]] <- point_buffer
      cvegeo_occupied[[iters]] <- censo_sf_cell_aux |>
        st_drop_geometry() |>
        select(CVEGEO) |>
        distinct() |>
        pull(CVEGEO)
      
      covered_cvegeos <- c(covered_cvegeos, 
            cvegeo_occupied[[iters]]
          )
      
      if (current_value > min_value) {
        min_value <- current_value
        min_idx <- iters
      }
    }
  }, silent = FALSE)
  
  # Update lists and values  
  cells_visited <- c(cells_visited, current_cell)
  cells_list <- cells_list[!cells_list %in% cells_visited]
  values_hist <- c(values_hist, sum(opt_values))
  iters <- iters + 1
  
}

    
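The selected centroids can be flattened into WGS84 lng/lat pairs. A sketch, assuming `opt_points` is the list of one-row sf centroids filled by the loop above:

```r
# Sketch: bind the seven selected centroids and express them as lng/lat.
library(sf)

selected <- do.call(rbind, opt_points) |> st_transform(4326)
coords   <- st_coordinates(selected)

locations <- data.frame(lng = coords[, 1], lat = coords[, 2])
```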

A 15-minute city

Now, let’s see what the algorithm’s results tell us about the business problem given at the beginning of the section.

The selected locations, with their respective POBTOT approximations via the circular buffer, are:

lng lat
-99.13293 19.38745
-99.04301 19.31666
-99.14326 19.33236
-99.04204 19.37103
-99.11010 19.46679
-99.22062 19.35752
-99.18120 19.46918

If we calculate the HERE API isochrones, we can map out the actual coverage of each of the selected 15-minute isochrones:



Overall, we can conclude that 67.94% of CDMX’s population can be reached within 15 minutes by using only seven strategic points across the city. This opens the door to many solutions in urban mobility, retail commerce, healthcare, and public policy.

As we can see from the map, the most populated area is in the northern part of the city: the southern part contains forest and lake areas, so the population is concentrated in a few municipalities. We can take advantage of this by placing strategic points to maximise coverage. The following table shows the coverage percentage by municipality.

Population coverage (%) by municipality.